HowTo: Set up the Raspberry Pi Pico with Prometheus & Grafana

Version 1.2
- 14JUN2022 Added a few sections, fixed some code errors
- 12JUN2022 Debug mode for prom_client, Grafana updates, grammar & typo checks
- 08JUN2022 Initial release 

Objectives

The purpose of this guide is to set up a Raspberry Pi Pico to acquire environmental telemetry. The data from the Pico is then gathered by a Prometheus instance and visualized through a Grafana dashboard.

Assumptions

Before commencing with this HowTo, there are some expectations regarding prior knowledge, expertise and available resources.

Requirements

Software versions are current as of the time of writing this HowTo. The TelemMonitor.7z archive contains the items indicated with * and all of the starting code for this HowTo.

Setting up the Pico

Warning

Do not insert electronic components or other devices into the breadboard when the Pico is connected to a computer.

For the Pico to do something useful, we will connect a DHT11 (temperature and humidity sensor) to it. This means we require not only the device but also the CircuitPython package to interface with the DHT11. Assuming the Pico is inserted into a breadboard, insert the DHT11 and connect its power pins (+, -) to the Pico and its data pin (middle/data out) to GP16 (physical pin 21). Then connect the Pico to the computer with a suitable USB cable.

The pico telemetry folder contains a snapshot of the required libraries (DHT11 & NRF24L01) and the boot.py and code.py.

You can download an updated copy of the adafruit_dht library and copy it to the lib folder.

Using Circup

You can use Circup to install the DHT and NRF24L01 libraries instead of manually installing them. The same tool can be used to maintain the CircuitPython libraries on the Pico too.

The Pico will have two code files (and libraries as per the screenshot below):

# boot.py - initialise USB to support a data port
import usb_cdc

# enable console and data over the USB port
usb_cdc.enable(console=True, data=True)

# code.py - The main telemetry acquisition code
import time
import struct
import board
import digitalio
import usb_cdc
import adafruit_dht
from microcontroller import cpu

# access the UART data over USB
uart = usb_cdc.data

# LED blink for diagnostics
led = digitalio.DigitalInOut(board.GP25)
led.direction = digitalio.Direction.OUTPUT

# attach the DHT to the board pin
dht = adafruit_dht.DHT11(board.GP16)

# data packet/buffer
buffer = [0]

while True:
    try:
        h = int(dht.humidity)
        t = int(dht.temperature)  # cast to int so struct.pack("<iii", ...) accepts it
        c = int(cpu.temperature)
        # 3x integers (4 bytes per integer) = 12 byte data packet
        buffer = struct.pack("<iii", h, t, c)
        uart.write(buffer)
        # diagnostics to the REPL console for monitoring the telemetry
        print(buffer)
    except RuntimeError as error:
        print("Transmission or data error")
        continue
    except Exception as error:
        dht.exit()
        raise error
    led.value = not led.value
    time.sleep(1)

Sample Diagnostics in the Console

If you monitor the Pico in a console (mu editor REPL or putty) you should get output similar to the following:

b'1\x00\x00\x00\x17\x00\x00\x00\x12\x00\x00\x00'
b'1\x00\x00\x00\x17\x00\x00\x00\x11\x00\x00\x00'
b'0\x00\x00\x00\x17\x00\x00\x00\x11\x00\x00\x00'
b'0\x00\x00\x00\x17\x00\x00\x00\x12\x00\x00\x00'

This alone does not confirm the data port is working. If you use putty, you can monitor the data port of the Pico directly.
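The packed bytes shown in the console can be decoded on the computer side with the same struct layout used on the Pico. A minimal sketch, using the first sample line from the console output above:

```python
import struct

# First sample packet from the console output above:
# three little-endian 32-bit integers (humidity, temperature, CPU temperature)
packet = b'1\x00\x00\x00\x17\x00\x00\x00\x12\x00\x00\x00'

humidity, temp, cputemp = struct.unpack("<iii", packet)
print(humidity, temp, cputemp)  # -> 49 23 18
```

Note the leading b'1' is simply the byte 0x31 (decimal 49) rendered as its ASCII character, which is why the raw output can look "random".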

Pico Data Port

When you connect the Pico to the computer it should create two additional serial ports. Under Windows these will appear as two COM ports in the Device Manager, as per the example below. The second COM port is the data port. If you use putty to connect to this port, you should see some data showing up.

Pico COM ports in Windows device manager:

Pico Putty configuration:

Pico sample data output:

If you see "random" looking data in the putty console, then the Pico is transmitting data over the USB in serial mode as expected.

The serial COM ports in Linux

Under Linux these serial ports are generally called tty. You can read more about them here.
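If you are unsure which tty device the Pico has created, a quick stdlib-only sketch can list the candidates (assuming the usual /dev/ttyACM* naming for USB CDC devices; the function name is ours, not from the starter code):

```python
import glob

def pico_candidate_ports():
    """Return serial device paths that may belong to the Pico (Linux)."""
    # USB CDC devices normally appear as /dev/ttyACM*;
    # USB-to-serial adapters appear as /dev/ttyUSB*.
    return sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))

print(pico_candidate_ports())
```

With the Pico connected you would typically see two entries, e.g. /dev/ttyACM0 (console) and /dev/ttyACM1 (data).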

Prometheus Systems Monitoring and Alerting

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus collects and stores its metrics as time series data, in other words, metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

Pico to Prometheus client

Before we start using Prometheus we need to create a Prometheus instrumenting client. This connects to the Pico via the serial COM interface and generates metrics suitable for Prometheus. We will use the client from the starter code. You will need to install a few Python libraries (not CircuitPython versions) before attempting to use the client code, namely:

pip install prometheus_client
pip install pyserial

The first library provides our code with the necessary interfacing to Prometheus. The second library lets our Prometheus client communicate to the Pico over a serial interface.

# prom_client.py - Prometheus client data acquisition and publishing
'''
DO NOT put this file onto the Pico
This file is to be used with Prometheus & Grafana
'''
import random
import struct
import time

import serial  # pip install pyserial
# pip install prometheus_client
from prometheus_client import Gauge, start_http_server

# use this flag to enable the demo data mode, used when a Pico is not available
DEMO = False

# NB open the CORRECT COM port - this is the data port of the Pico.
try:
    # Open port in exclusive non-blocking mode
    if not DEMO:
        uart = serial.Serial("COM4", 9600, timeout=0, exclusive=True)
except Exception as error:
    print("Unable to connect to the Pico. Check the Pico connection and if the port number is correct")
    exit()

# register 3 Prometheus metrics as Gauge types
G1 = Gauge('my_humidity', 'DHT11 Ambient Humidity')
G2 = Gauge('my_temperature', 'DHT11 Ambient Temperature')
G3 = Gauge('my_cputemp', 'Pico CPU temperature')

# read in the Pico data and update the metrics
def process_request():
    if DEMO:
        # demo data generator for testing Prometheus & Grafana without the Pico device
        # humidity, temp, cputemp
        datain = struct.pack("<iii", random.randint(30, 80), random.randint(0, 50), random.randint(15, 60))
    else:
        datain = uart.read(12)  # incoming data 3 x integers @ 4 bytes each (32 bit)
    siz = len(datain)
    # make sure we have 12 bytes when reading the serial data as we have the serial setup
    # in non-blocking mode above
    if siz == 12:
        print(siz, datain)  # diagnostics - this should display the data as it is received
        humidity, temp, cputemp = struct.unpack("<iii", datain)
        G1.set(humidity)
        G2.set(temp)
        G3.set(cputemp)
    time.sleep(1)

# browse to http://localhost:8080
# Start the HTTP service and publish the metrics
if __name__ == '__main__':
    start_http_server(8080)
    while True:
        process_request()

A copy of this file is in the root of the TelemMonitor folder, called prom_client.py. You can either run it straight from the command line or from within VSCode or Thonny. Do note this client runs as a daemon and listens on port 8080. Change the listening port to suit your needs.

Before starting the client, make sure the Pico is still connected to the computer. Start the client like this from a PowerShell prompt:

Confirm python is working

python --version

Start the client

cd c:\TelemMonitor
python .\prom_client.py

In this code we created three metrics, one for each of the telemetry items we receive from the Pico. Take note of the names of the three metrics, prefixed with my_, as we are going to refer to them later in Prometheus and Grafana.

Once the daemon has started, you can browse to http://localhost:8080/metrics to monitor the incoming metrics. You should see telemetry similar to the following:

    # HELP python_gc_objects_collected_total Objects collected during gc
    # TYPE python_gc_objects_collected_total counter
    python_gc_objects_collected_total{generation="0"} 298.0
    python_gc_objects_collected_total{generation="1"} 76.0
    python_gc_objects_collected_total{generation="2"} 0.0
    
    ...

    # HELP my_humidity DHT11 Ambient Humidity
    # TYPE my_humidity gauge
    my_humidity 49.0
    # HELP my_temperature DHT11 Ambient Temperature
    # TYPE my_temperature gauge
    my_temperature 23.0
    # HELP my_cputemp Pico CPU temperature
    # TYPE my_cputemp gauge
    my_cputemp 19.0

Note the three metrics at the tail end of the output. You can refresh the page to see if the values change over time. This is the data that is then ingested by Prometheus.
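To see what Prometheus does with this page, here is a deliberately simplified sketch of parsing the text exposition format, fed with the my_ lines from the sample output above (the function is ours for illustration; it ignores labels, which the real format supports):

```python
def parse_metrics(text):
    """Parse simple 'name value' lines, skipping # HELP / # TYPE comments."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP my_humidity DHT11 Ambient Humidity
# TYPE my_humidity gauge
my_humidity 49.0
# HELP my_temperature DHT11 Ambient Temperature
# TYPE my_temperature gauge
my_temperature 23.0
# HELP my_cputemp Pico CPU temperature
# TYPE my_cputemp gauge
my_cputemp 19.0
"""

print(parse_metrics(sample))
# -> {'my_humidity': 49.0, 'my_temperature': 23.0, 'my_cputemp': 19.0}
```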

Stopping the prom_client.py

Because the client runs as a daemon, you need to make sure you have a means to stop it. If you started it from the PowerShell CLI, this would be Ctrl-C.
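One way to make the stop behaviour explicit is to catch KeyboardInterrupt around the acquisition loop, so Ctrl-C also closes the serial port cleanly. A sketch, where process_request and uart are the names used in prom_client.py above:

```python
def run_forever(loop_body, cleanup):
    """Run loop_body repeatedly until Ctrl-C, then call cleanup."""
    try:
        while True:
            loop_body()
    except KeyboardInterrupt:
        cleanup()
        print("Stopped, serial port closed")

# In prom_client.py the main block could then read:
# run_forever(process_request, uart.close)
```

Closing the port matters because it was opened with exclusive=True; an abandoned handle can block the next run from opening it.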

Prometheus Server

The Prometheus folder contains a copy of Prometheus as a portable distribution. The goprom.cmd batch file can be used to start the Prometheus server.

You can download a newer version of Prometheus from here to match your operating system.

A working copy of Prometheus is available in the prometheus folder. The main part we are interested in at this point is the configuration. This is kept in the root of the Prometheus folder with the filename prometheus.yml. Be careful when editing YAML files as the parsers can be particular about the layout. Open the provided file and have a look at the end of the file for the following information:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      # Prometheus internal metrics
      - targets: ["localhost:9090"]
      # Metrics from the prom_client.py - Pico
      - targets: ["localhost:8080"]
        labels:
          group: "Environment"

The section of note is static_configs, specifically where we have informed Prometheus about our Pico client: the "- targets: ["localhost:8080"]" entry. This was added to the configuration for the purposes of this HowTo. Prometheus needs to be informed about the data sources it needs to scrape.

The other configuration of note is the "- targets: ["localhost:9090"]" whereby Prometheus publishes its own metrics. We can use this to confirm Prometheus is operational.

Feel free to add your own. You can find more information about the configuration here.

You can use the provided batch file goprom.cmd to start the Prometheus server. Once it has started you can browse to http://localhost:9090/metrics. This should give you a long page of metrics for Prometheus, similar to what we had for our Pico client. The next URL to use, and the one we are interested in, is http://localhost:9090. This should bring up the Prometheus metric query page as shown below.

Start typing my_ and you should get a list of our 3 Pico metrics as per the screenshot below. This tells us Prometheus is receiving our data.

Select one of the metrics and click Execute on the right. You should now have the metric value read from the Pico & DHT11. Below we have an example of the Pico CPU temperature. Feel free to try out the other two metrics.
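A couple of example queries to try in the query box (these are standard PromQL; the metric names are the ones registered in prom_client.py):

```promql
# instant value of the Pico CPU temperature
my_cputemp

# 5-minute moving average of the ambient temperature
avg_over_time(my_temperature[5m])
```

The second query only returns data once Prometheus has scraped a few minutes of samples.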

Grafana Visualization and Analytics Software

Grafana is open-source visualization and analytics software. It allows you to query, visualize, alert on, and explore your metrics, logs, and traces no matter where they are stored. It provides you with tools to turn your time-series database (TSDB) data into insightful graphs and visualizations.

You can download a newer version of Grafana from here to match your operating system.

Note

We are using the self-managed version of Grafana and not the cloud based version.

A working copy of Grafana is available in the grafana folder. As with Prometheus, the part we are interested in with Grafana is the configuration. This is usually kept in the TelemMonitor\grafana\conf (or equivalent for your operating system) folder with the filename custom.ini. Normally this file does not exist in a new installation, so you have to make a copy of sample.ini for this. Be careful when editing this file as it is fairly lengthy and it is easy to break some configuration. Use Notepad++ or VSCode (syntax highlighting helps immensely) to edit custom.ini. There are a couple of places of interest in this file that we need to take note of for this HowTo.

Around line 41 is the server port - the default is 3000 but you can use your own to suit your environment. This listening port is what we are going to browse to. Uncomment it if need be.

# The http port to use
http_port = 3000

Around lines 245 & 248 are the login credentials. We are going to keep this simple here, but you would use stronger authentication in a production environment.

# default admin user, created on startup
admin_user = admin

# default admin password, can be changed before first start of grafana, or in profile settings
admin_password = admin

We can leave the rest of the configuration file alone for now. Do have a look at the getting started guide for more on the Grafana setup.

While the Pico client and Prometheus are running, you can use the provided batch file gograf.cmd to start the Grafana server. Once it has started you can browse to http://localhost:3000. This should bring up the Grafana dashboard. Normally you would go ahead and create a dashboard or import an existing one. For the Grafana used in this HowTo we have already created one with 3 panels. This can be seen below as Environmental values from Pico over UART.

Note

If the Environmental values from Pico over UART dashboard is not visible then click on the icon then Browse. You should see a list of available dashboards and you can then select Environmental values from Pico over UART.

Clicking on the Environmental values from Pico over UART dashboard will bring up our already created dashboard for our Pico.

Creating a new Grafana Dashboard

Don't forget to add a data source for your dashboard

Before you start adding or creating dashboards, make sure you have added the required data source as per the screenshot below.

Grafana is probably the easiest way to get started with metrics, logs, traces, and dashboards. To create your own metrics dashboard, start by clicking on the new Grafana dashboard icon, which should bring up the following.

You then start by adding a panel as shown below. From here on, it is up to you to navigate and familiarise yourself with this interface. The main areas of interest are:

Once you have created a number of panels, don't forget to rename your panels and the dashboard too. Save your work!

Some Grafana dashboard guides

You can visit Grafana dashboards: A complete guide to all the different types you can build for an easy guide to creating your own Grafana dashboards. Another example can be found here

Refer to the Grafana manual for more information about creating dashboards and adding panels.

End-of-file